final rank
The Swiss Gambit
Cseh, Ágnes, Führlich, Pascal, Lenzner, Pascal
In each round of a Swiss-system tournament, players of similar score are paired against each other. An intentional early loss might therefore lead to weaker opponents in later rounds and thus to a better final tournament result - a phenomenon known as the Swiss Gambit. To the best of our knowledge, it is an open question whether this strategy can actually work. This paper provides answers based on an empirical agent-based analysis for the most prominent application area of the Swiss-system format, namely chess tournaments. We simulate realistic tournaments by employing the official FIDE pairing system to compute the player pairings in each round. We show that even though gambits are widely possible in Swiss-system chess tournaments, profiting from them requires a high degree of predictability of match results. Moreover, even if a Swiss Gambit succeeds, the obtained improvement in the final ranking is limited. Our experiments show that counting on a Swiss Gambit is much more of a risky gamble than a reliable strategy for improving the final rank.
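The score-based pairing idea underlying the Swiss system can be sketched as follows. This is a deliberately simplified simulation (adjacent pairing by score with a random result model), not the full FIDE Dutch pairing system the paper employs; all names and the result model are illustrative assumptions:

```python
import random

def play_round(players, scores):
    """One simplified Swiss round: sort players by current score so
    that neighbours have similar scores, then pair them adjacently.
    (The real FIDE Dutch system adds colour and float rules.)"""
    order = sorted(players, key=lambda p: -scores[p])
    for a, b in zip(order[::2], order[1::2]):
        winner = random.choice((a, b))  # placeholder: coin-flip results
        scores[winner] += 1             # 1 point per win, no draws here

# Toy tournament: 8 players, 3 rounds
players = list(range(8))
scores = {p: 0 for p in players}
for _ in range(3):
    play_round(players, scores)
```

A gambit experiment on top of this would force one agent to lose an early game and compare its final rank across many simulated tournaments.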
Prediction of the final rank of Players in PUBG with the optimal number of features
Sen, Diptakshi, Roy, Rupam Kumar, Majumdar, Ritajit, Chatterjee, Kingshuk, Ganguly, Debayan
PUBG is an online video game that has become very popular among young players in recent years. The final rank, which indicates a player's performance, is one of the most important features of this game. This paper focuses on predicting the final rank of players based on their skills and abilities. We have used different machine learning algorithms to predict the final rank on a dataset obtained from Kaggle that has 29 features. Using the correlation heatmap, we have varied the number of features used for the models. Among these models, GBR and LGBM have given the best results, with accuracies of 91.63% and 91.26% respectively for 14 features, and 90.54% and 90.01% for 8 features. Although the accuracy of the models with 14 features is slightly better than with 8 features, the empirical runtime with 8 features is 1.4x lower for LGBM and 1.5x lower for GBR. Furthermore, reducing the number of features any further significantly hampers the performance of all the ML models. Therefore, we conclude that 8 is the optimal number of features for predicting the final rank of a player in PUBG with high accuracy and low run-time.